Cross-layer fusion feature based on richer convolutional features for edge detection
SONG Jie, YU Yu, LUO Qifeng
Journal of Computer Applications    2020, 40 (7): 2053-2058.   DOI: 10.11772/j.issn.1001-9081.2019112057
Aiming at problems such as chaotic and blurred edge lines produced by current deep-learning-based edge detection methods, an end-to-end Cross-layer Fusion Feature (CFF) model for edge detection based on RCF (Richer Convolutional Features) was proposed. In this model, RCF was used as the baseline: a CBAM (Convolutional Block Attention Module) was added to the backbone network, translation-invariant downsampling was adopted, and some downsampling operations in the backbone were removed to preserve image detail, while dilated convolution was used to enlarge the receptive field. In addition, feature maps were fused across layers so that high-level and low-level features could be fully combined. To balance the per-stage losses against the fusion loss, and to avoid excessive loss of low-level detail after multi-scale feature fusion, weight parameters were added to the losses. The model was trained on the Berkeley Segmentation Data Set (BSDS500) and the PASCAL VOC Context dataset, and image-pyramid testing was used to improve the quality of the edge maps. Experimental results show that the contours extracted by the CFF model are clearer than those extracted by the baseline network and that the model alleviates edge blurring. On the BSDS500 benchmark, the model raises the Optimal Dataset Scale (ODS) and Optimal Image Scale (OIS) scores to 0.818 and 0.839 respectively.
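The weighted balance between the per-stage losses and the fusion loss described above can be sketched as follows. This is a minimal plain-Python illustration; the function names and the scalar cross-entropy are stand-ins, not the paper's actual implementation, and the weight values are hypothetical:

```python
import math

def bce(pred, label, eps=1e-7):
    """Pixel-wise binary cross-entropy, averaged over the edge map."""
    total = 0.0
    for p, y in zip(pred, label):
        p = min(max(p, eps), 1.0 - eps)          # clamp to avoid log(0)
        total += -(y * math.log(p) + (1.0 - y) * math.log(1.0 - p))
    return total / len(pred)

def total_loss(stage_preds, fuse_pred, label, stage_weights, fuse_weight):
    """Weighted sum of the per-stage losses and the fusion loss.

    stage_weights and fuse_weight play the role of the balancing
    parameters the paper adds so that low-level detail is not drowned
    out after multi-scale fusion.
    """
    loss = fuse_weight * bce(fuse_pred, label)
    for w, pred in zip(stage_weights, stage_preds):
        loss += w * bce(pred, label)
    return loss
```

In a real training loop the maps would be tensors and the weights hyperparameters; here they are flat lists so the arithmetic is easy to follow.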
Variable intuitionistic fuzzy multi-granulation rough set model and its approximate distribution reduction algorithms
WAN Zhichao, SONG Jie, SHENG Yongliang
Journal of Computer Applications    2018, 38 (2): 390-398.   DOI: 10.11772/j.issn.1001-9081.2017071894
To approximate a target concept more tightly in multi-granulation rough set models, the intuitionistic fuzzy rough set and the multi-granulation rough set were combined into an intuitionistic fuzzy multi-granulation rough set model. Because the target approximation of this model is too loose, a variable intuitionistic fuzzy multi-granulation rough set model was then proposed by introducing parameters, and the validity of the improved model was proved. On the basis of this model, a corresponding approximate distribution reduction algorithm was also proposed. The simulation results show that, compared with the existing fuzzy multi-granulation decision-theoretic rough set and the multi-granulation double-quantitative decision-theoretic rough set, the proposed lower approximate distribution reduction algorithm retains 2 to 4 more attributes, while the proposed upper approximate distribution reduction algorithm retains 1 to 5 fewer attributes; meanwhile, the approximation accuracy of the reduction results is more reasonable. Theoretical analysis and experimental results verify that the proposed variable intuitionistic fuzzy multi-granulation rough set model is superior in both approximating the target concept and reducing dimensionality.
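As a hedged illustration of the underlying idea, the sketch below implements the classical (crisp, optimistic) multi-granulation lower and upper approximations; the paper's intuitionistic fuzzy and variable (parameterized) extensions add membership/non-membership degrees and threshold parameters on top of this basic scheme:

```python
def blocks(partition):
    """Map each object to its granule (equivalence class) under one partition."""
    cls = {}
    for block in partition:
        for x in block:
            cls[x] = frozenset(block)
    return cls

def optimistic_lower(universe, partitions, target):
    """x belongs to the optimistic multi-granulation lower approximation
    if its class under AT LEAST ONE granulation lies inside the target."""
    target = set(target)
    maps = [blocks(p) for p in partitions]
    return {x for x in universe if any(m[x] <= target for m in maps)}

def optimistic_upper(universe, partitions, target):
    """x belongs to the optimistic multi-granulation upper approximation
    if its class under EVERY granulation intersects the target."""
    target = set(target)
    maps = [blocks(p) for p in partitions]
    return {x for x in universe if all(m[x] & target for m in maps)}
```

The approximation is tight when the lower and upper sets sandwich the target closely; the parameters introduced in the paper tune how strictly "lies inside" and "intersects" are enforced.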
Probery: probability-based data query system for big data
WU Jinbo, SONG Jie, ZHANG Li, BAO Yubin
Journal of Computer Applications    2016, 36 (1): 8-12.   DOI: 10.11772/j.issn.1001-9081.2016.01.0008
Since full-result queries over big data are excessively time-consuming, a system named Probery was proposed. Unlike traditional approximate query, Probery adopted an approximate full-result query method, an original way to query data: the approximation refers to the probability that the query results contain all data satisfying the query conditions. Firstly, Probery divided the stored data into multiple data segments. Secondly, it placed the segments in a Distributed File System (DFS) according to a probability-based placement model. Finally, given a query condition, it adopted a heuristic method to query the data probabilistically. With the completeness of the result set reduced by 8%, Probery's query time was 51% less than that of HBase, 23% less than Cassandra, 12% less than MongoDB and 3% less than Hive, compared against these dominant non-relational data management systems. The experimental results show that Probery improves query performance at an acceptable loss of result completeness, and that it offers good generality, adaptability and extensibility for big data query.
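The heuristic query step can be sketched as follows, under the assumption that each segment's chance of holding a matching record is known from the placement model and that segments are independent; the function name and the probability model are illustrative, not Probery's actual code:

```python
def choose_segments(segment_probs, confidence):
    """Greedy heuristic: scan segments in descending order of their
    probability of holding matching records, stopping as soon as the
    probability that every match lies in a scanned segment (i.e. that
    no skipped segment holds a match) reaches `confidence`.

    segment_probs maps a segment id to its match probability, assumed
    independent across segments and strictly below 1.0.
    """
    ordered = sorted(segment_probs.items(), key=lambda kv: kv[1], reverse=True)
    # Completeness if we scanned nothing: no segment may hold a match.
    completeness = 1.0
    for _, p in ordered:
        completeness *= (1.0 - p)
    scanned = []
    for seg, p in ordered:
        if completeness >= confidence:
            break
        scanned.append(seg)
        completeness /= (1.0 - p)   # seg is now scanned; its risk is removed
    return scanned
```

Scanning fewer segments than a full scan is exactly where the reported time savings over full-result systems would come from: segments with a negligible match probability are skipped once the confidence target is met.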
New NAND device management solution with high storage density
WEI Bing, GUO Yutang, SONG Jie, ZHANG Lei
Journal of Computer Applications    2014, 34 (8): 2434-2437.   DOI: 10.11772/j.issn.1001-9081.2014.08.2434
Focused on the problem of low storage density in embedded systems, a new NAND device management solution with high storage density was proposed. In this solution, a generalized layout of the information in a NAND page was designed after studying a large number of NAND storage structures and BCH (Bose-Chaudhuri-Hocquenghem) parity coding schemes. In this layout, the data placed in the Out-Of-Band (OOB) area provides Error Correcting Code (ECC) capability while also accommodating the partition's device management information, so the main page can be used entirely for data storage; this serves as the basis for the device read-write scheme and the wear-leveling mechanism. The experimental results show that the proposed solution raises storage density to as much as 98%, outperforming most common file systems. With excellent data storage density, as well as relatively stable device read-write efficiency and Program/Erase (P/E) endurance, the solution has good application advantages in embedded systems.
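The storage-density gain can be illustrated with a toy calculation. The geometry below (a common 2 KiB-page, 64-byte-OOB SLC layout) and the byte budgets are assumptions for illustration, not the exact figures from the paper:

```python
def storage_density(page_bytes, oob_bytes, mgmt_bytes, ecc_bytes):
    """Fraction of raw NAND capacity available for user data when both
    the ECC code and the per-page management info fit inside the OOB
    area, leaving the main page entirely for data."""
    # The scheme only works if the OOB can hold ECC plus management info.
    assert mgmt_bytes + ecc_bytes <= oob_bytes, "OOB overflow"
    return page_bytes / (page_bytes + oob_bytes)

# Illustrative 2 KiB page with 64-byte OOB: 16 bytes of management
# info and 40 bytes of BCH parity both fit in the OOB.
density = storage_density(2048, 64, 16, 40)
```

Under these assumed numbers the density is 2048/2112, roughly 97%; schemes that steal part of the main page for metadata or ECC would land below this, which is the gap the paper's layout is designed to close.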
